Thompson Sampling for Contextual Bandits with Linear Payoffs
Abstract
The following lemma is implied by Theorem 1 in Abbasi-Yadkori et al. (2011):

Lemma 7 (Abbasi-Yadkori et al., 2011). Let $(\mathcal{F}'_t;\ t \ge 0)$ be a filtration, let $(m_t;\ t \ge 1)$ be an $\mathbb{R}^d$-valued stochastic process such that $m_t$ is $\mathcal{F}'_{t-1}$-measurable, and let $(\eta_t;\ t \ge 1)$ be a real-valued martingale difference process such that $\eta_t$ is $\mathcal{F}'_t$-measurable. For $t \ge 0$, define $\xi_t = \sum_{\tau=1}^{t} m_\tau \eta_\tau$ and $M_t = I_d + \sum_{\tau=1}^{t} m_\tau m_\tau^{\mathsf{T}}$, where $I_d$ is the $d$-dimensional identity matrix. Assume $\eta_t$ is conditionally $R$-sub-Gaussian.
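Since the lemma is stated in terms of the running quantities $\xi_t$ and $M_t$, a minimal NumPy sketch of how one might maintain them incrementally may help; the class name and interface below are illustrative and are not taken from the paper.

```python
import numpy as np

class SelfNormalizedStats:
    """Maintains xi_t = sum_{tau<=t} m_tau * eta_tau and
    M_t = I_d + sum_{tau<=t} m_tau m_tau^T from Lemma 7 (illustrative names)."""

    def __init__(self, d):
        self.xi = np.zeros(d)   # xi_0 = 0
        self.M = np.eye(d)      # M_0 = I_d

    def update(self, m_t, eta_t):
        # m_t: the R^d-valued process value observed this round;
        # eta_t: the real-valued martingale difference (noise) term.
        self.xi += m_t * eta_t
        self.M += np.outer(m_t, m_t)

    def weighted_norm(self):
        # ||xi_t||_{M_t^{-1}}: the self-normalized quantity controlled by
        # Abbasi-Yadkori-style concentration bounds.
        return float(np.sqrt(self.xi @ np.linalg.solve(self.M, self.xi)))
```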
Similar resources
A Survey on Contextual Multi-armed Bandits
4 Stochastic Contextual Bandits; 4.1 Stochastic Contextual Bandits with Linear Realizability Assumption; 4.1.1 LinUCB/SupLinUCB; 4.1.2 LinREL/SupLinREL; 4.1.3 CofineUCB; 4.1.4 Thompson Sampling with Linear Payoffs...
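As a companion to the LinUCB entry in this outline, here is a minimal sketch of the standard LinUCB selection and update steps (shared-parameter variant); the function names and the exploration parameter alpha are illustrative and are not taken from the survey.

```python
import numpy as np

def linucb_choose(arm_features, A, b, alpha=1.0):
    # A: d x d design matrix (initialised to the identity), b: d-dim response
    # vector; alpha is an exploration parameter with an illustrative default.
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b                       # ridge-regression estimate
    ucb = [x @ theta_hat + alpha * np.sqrt(x @ A_inv @ x) for x in arm_features]
    return int(np.argmax(ucb))

def linucb_update(A, b, x, reward):
    # Rank-one update with the played arm's feature vector x.
    A += np.outer(x, x)
    b += reward * x
    return A, b
```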
Thompson Sampling for Contextual Bandits with Linear Payoffs
Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated that it has better empirical performance than state-of-the-art methods. However, many questions regarding its theoretical performance remained open. In this paper, we d...
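To make the algorithmic idea concrete, the following is a minimal sketch of Thompson Sampling with a Gaussian prior for linear payoffs, in the spirit of this paper: sample a parameter vector from the current posterior and play the arm that looks best under that sample. The function names and the width parameter v are illustrative, and the sketch omits the paper's specific constants.

```python
import numpy as np

def lints_choose(arm_features, B, f, v=1.0, rng=None):
    # B: d x d matrix (identity plus outer products of played features),
    # f: d-dim vector of reward-weighted played features,
    # v: posterior-width parameter (illustrative value).
    rng = rng or np.random.default_rng()
    B_inv = np.linalg.inv(B)
    mu_hat = B_inv @ f                                        # least-squares estimate
    theta = rng.multivariate_normal(mu_hat, v ** 2 * B_inv)   # posterior sample
    return int(np.argmax([x @ theta for x in arm_features]))

def lints_update(B, f, x, reward):
    # Incorporate the observed reward for the played feature vector x.
    B += np.outer(x, x)
    f += reward * x
    return B, f
```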
CBRAP: Contextual Bandits with RAndom Projection
Contextual bandits with linear payoffs, also known as linear bandits, provide a powerful alternative for solving practical sequential decision problems, e.g., online advertising. In the era of big data, contextual data tend to be high-dimensional, which leads to new challenges for traditional linear bandits, mostly designed for the setting of low-dimensional contextual d...
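The sketch below illustrates only the random-projection idea mentioned here, not CBRAP's actual algorithm: contexts are mapped to a low-dimensional space with a Gaussian random matrix before being handed to a linear bandit such as the sketches above. The dimensions and the projection scaling are illustrative.

```python
import numpy as np

def random_projection(d, k, rng=None):
    # k x d Gaussian matrix scaled by 1/sqrt(k), so inner products are
    # approximately preserved (Johnson-Lindenstrauss); choose k << d.
    rng = rng or np.random.default_rng(0)
    return rng.standard_normal((k, d)) / np.sqrt(k)

# Example: compress a high-dimensional context before running a
# low-dimensional linear bandit such as the LinUCB/LinTS sketches above.
P = random_projection(d=10_000, k=50)
x_high = np.random.default_rng(1).standard_normal(10_000)   # raw context vector
x_low = P @ x_high                                           # 50-dim projected context
```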
Generalized Thompson Sampling for Contextual Bandits
Thompson Sampling, one of the oldest heuristics for solving multi-armed bandits, has recently been shown to achieve state-of-the-art performance. This empirical success has led to great interest in a theoretical understanding of the heuristic. In this paper, we approach this problem in a way very different from existing efforts. In particular, motivated by the connection between Thompson Sam...
Customized Nonlinear Bandits for Online Response Selection in Neural Conversation Models
Dialog response selection is an important step towards natural response generation in conversational agents. Existing work on neural conversational models mainly focuses on offline supervised learning using a large set of context-response pairs. In this paper, we focus on online learning of response selection in retrieval-based dialog systems. We propose a contextual multi-armed bandit model wi...